Many state-of-the-art adversarial training methods leverage upper bounds on the adversarial loss to provide security guarantees. Yet these methods require a computation at each training step that cannot be incorporated into the gradient for backpropagation. We introduce a new, more principled approach to adversarial training based on a closed-form solution of an upper bound on the adversarial loss, which can be trained effectively with backpropagation. This bound is facilitated by state-of-the-art tools from robust optimization. We derive two new methods from our approach. The first method (Approximated Robust Upper Bound, or aRUB) uses a first-order approximation of the network together with basic tools from linear robust optimization to obtain an approximate upper bound on the adversarial loss that is easy to implement. The second method (Robust Upper Bound, or RUB) computes an exact upper bound on the adversarial loss. Across a variety of tabular and vision datasets, we demonstrate the effectiveness of our more principled approach: RUB is substantially more robust than state-of-the-art methods for larger perturbations, while aRUB matches the performance of state-of-the-art methods for small perturbations. Moreover, both RUB and aRUB run faster than standard adversarial training (at the expense of increased memory). All code to reproduce the results can be found at https://github.com/kimvc7/trobustness.
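As a minimal sketch of the linearization idea behind such a first-order bound (assuming an l-infinity perturbation set; `model`, `loss_fn`, and `eps` are placeholders, and this is an illustration of the general technique, not the authors' exact derivation): linearizing the loss around the input turns the inner maximization over the perturbation ball into a closed-form dual-norm penalty that backpropagation can handle directly.

```python
import torch

def linearized_robust_loss(model, loss_fn, x, y, eps):
    """First-order upper-bound surrogate for the adversarial loss.

    For an l_inf perturbation of radius eps, the linearization
        max_{||d||_inf <= eps} L(x + d) ~ L(x) + eps * ||grad_x L||_1
    replaces the inner maximization with a closed-form penalty
    (the l_1 norm is the dual of the l_inf norm).
    """
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    # Gradient of the loss w.r.t. the input; create_graph=True keeps the
    # penalty differentiable so it can itself be backpropagated through.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = eps * grad_x.flatten(1).norm(p=1, dim=1).mean()
    return loss + penalty
```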
Symmetry is omnipresent in nature and is perceived by the visual systems of many species, as it facilitates detecting ecologically important classes of objects in our environment. Symmetry perception requires abstracting non-local spatial dependencies between image regions, and its underlying neural mechanisms remain elusive. In this paper, we evaluate deep neural network (DNN) architectures on the task of learning symmetry perception from examples. We demonstrate that feed-forward DNNs that model human performance on object recognition tasks are unable to acquire a general notion of symmetry. This is the case even when the DNNs are constructed to capture non-local spatial dependencies, such as through 'dilated' convolutions and the recently introduced 'transformer' designs. By contrast, we find that recurrent architectures are capable of learning symmetry by decomposing the non-local spatial dependencies into a sequence of local operations that are reusable for novel images. These results suggest that recurrent connections likely play an important role in symmetry perception in artificial systems, and possibly in biological ones as well.
Object recognition and viewpoint estimation lie at the heart of visual understanding. Recent works have shown that convolutional neural networks (CNNs) fail to generalize to out-of-distribution (OOD) category-viewpoint combinations, i.e., combinations not seen during training. In this paper, we investigate when and how such OOD generalization is possible by evaluating CNNs trained to classify both object category and 3D viewpoint on OOD combinations, and by identifying the neural mechanisms that facilitate this OOD generalization. We show that increasing the number of in-distribution combinations (i.e., data diversity) substantially improves generalization to OOD combinations, even with the same total amount of training data. We compare learning category and viewpoint in separate versus shared network architectures and observe markedly different trends for in-distribution and OOD combinations: while shared networks help in-distribution, separate networks significantly outperform shared ones on OOD combinations. Finally, we demonstrate that such OOD generalization is facilitated by the neural mechanism of specialization, i.e., the emergence of two types of neurons: neurons selective to category and invariant to viewpoint, and vice versa.
Graph Neural Networks (GNNs) have shown great potential in the field of graph representation learning. Standard GNNs define a local message-passing mechanism that propagates information over the whole graph domain by stacking multiple layers. This paradigm suffers from two major limitations, over-squashing and poor long-range dependencies, which can be addressed with global attention but at the price of quadratic computational complexity. In this work, we propose an alternative approach to overcome these structural limitations by leveraging the ViT/MLP-Mixer architectures introduced in computer vision. We introduce a new class of GNNs, called Graph MLP-Mixer, that holds three key properties. First, they capture long-range dependencies and mitigate the issue of over-squashing, as demonstrated on the Long Range Graph Benchmark (LRGB) and the TreeNeighbourMatch datasets. Second, they offer better speed and memory efficiency, with complexity linear in the number of nodes and edges, surpassing the related Graph Transformer and expressive GNN models. Third, they show high expressivity in terms of graph isomorphism, as they can distinguish at least 3-WL non-isomorphic graphs. We test our architecture on 4 simulated datasets and 7 real-world benchmarks, and show highly competitive results on all of them.
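A minimal sketch of the mixer building block in this setting, under the assumption that the graph has already been split into a fixed number of patches (subgraphs, by analogy with image patches) and each patch pooled into an embedding upstream; the hypothetical `GraphMixerLayer` below then alternates token mixing across patches with channel mixing per patch, which is what makes the cost linear rather than quadratic. This illustrates the generic MLP-Mixer block, not the authors' full architecture.

```python
import torch
import torch.nn as nn

class GraphMixerLayer(nn.Module):
    """One MLP-Mixer block over pooled graph-patch embeddings.

    Input: (batch, num_patches, dim), where each patch embedding was
    pooled from a subgraph (graph partitioning assumed upstream).
    """
    def __init__(self, num_patches: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Token mixing: an MLP applied across the patch axis, letting
        # distant patches exchange information in a single step.
        self.token_mlp = nn.Sequential(
            nn.Linear(num_patches, hidden), nn.GELU(), nn.Linear(hidden, num_patches)
        )
        self.norm2 = nn.LayerNorm(dim)
        # Channel mixing: a per-patch MLP over feature channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        # (B, P, D) -> transpose so the MLP mixes across the P patches.
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # Mix across the D channels, with a residual connection.
        x = x + self.channel_mlp(self.norm2(x))
        return x
```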
This paper is devoted to the numerical resolution of McKean-Vlasov control problems via the class of mean-field neural networks introduced in our companion paper [25] in order to learn the solution on the Wasserstein space. We propose several algorithms either based on dynamic programming with control learning by policy or value iteration, or backward SDE from stochastic maximum principle with global or local loss functions. Extensive numerical results on different examples are presented to illustrate the accuracy of each of our eight algorithms. We discuss and compare the pros and cons of all the tested methods.
This paper describes the system developed at the Universitat Politècnica de Catalunya for the Workshop on Machine Translation 2022 Sign Language Translation Task, in particular for the sign-to-text direction. We use a Transformer model implemented with the Fairseq modeling toolkit. We have experimented with the vocabulary size, data augmentation techniques, and pretraining the model with the PHOENIX-14T dataset. Our system obtains a 0.50 BLEU score on the test set, improving the organizers' baseline by 0.38 BLEU. We remark on the poor results for both the baseline and our system, and thus the unreliability of our findings.
This paper studies the infinite-width limit of deep linear neural networks initialized with random parameters. We obtain that, when the number of neurons diverges, the training dynamics converge (in a precise sense) to the dynamics obtained from a gradient descent on an infinitely wide deterministic linear neural network. Moreover, even if the weights remain random, we get their precise law along the training dynamics, and prove a quantitative convergence result of the linear predictor in terms of the number of neurons. We finally study the continuous-time limit obtained for infinitely wide linear neural networks and show that the linear predictors of the neural network converge at an exponential rate to the minimal $\ell_2$-norm minimizer of the risk.
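As a worked illustration of the final claim, in the simplest possible setting (a single linear layer rather than the paper's deep case, so this is a sketch of the mechanism, not the paper's result): gradient flow on the squared risk $\frac{1}{2}\|X\beta_t - y\|^2$ satisfies

$$\dot\beta_t = -X^\top (X\beta_t - y), \qquad \beta_t - \beta_\star = e^{-t X^\top X}(\beta_0 - \beta_\star),$$

where $\beta_\star = X^+ y$ is the minimal $\ell_2$-norm minimizer. Initialized at $\beta_0 = 0$, the error $\beta_0 - \beta_\star = -\beta_\star$ lies in the row span of $X$, on which $X^\top X$ is positive definite, so $\beta_t \to \beta_\star$ at an exponential rate governed by the smallest nonzero eigenvalue of $X^\top X$.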
Recent advances in deep learning models for sequence classification have greatly improved their classification accuracy, especially when large training sets are available. However, several works have suggested that under some settings the predictions made by these models are poorly calibrated. In this work we study binary sequence classification problems and look at model calibration from a different perspective by asking the question: are deep learning models capable of learning the underlying target class distribution? We focus on sparse sequence classification, that is, problems in which the target class is rare, and compare three deep learning sequence classification models. We develop an evaluation that measures how well a classifier is learning the target class distribution. In addition, our evaluation disentangles good performance achieved by mere compression of the training sequences from performance achieved by proper model generalization. Our results suggest that in this binary setting the deep learning models are indeed able to learn the underlying class distribution in a non-trivial manner, i.e. by proper generalization beyond data compression.
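One simple way to instantiate such an evaluation (a hedged sketch under our own assumptions, not the authors' exact protocol; `implied_class_prior` and `distribution_gap` are hypothetical helpers): compare the model's implied prior for the rare class against the empirical class frequency, computed separately on training sequences (which memorization or compression alone can match) and on held-out sequences (which require genuine generalization).

```python
import numpy as np

def implied_class_prior(probs: np.ndarray) -> float:
    """Mean predicted probability of the positive (rare) class,
    i.e. the class prior the model implies on this set."""
    return float(np.mean(probs))

def distribution_gap(probs: np.ndarray, labels: np.ndarray) -> float:
    """Absolute gap between the implied prior and the empirical class
    frequency. A small gap on train but a large gap on held-out data
    suggests compression; small gaps on both suggest the class
    distribution was actually learned."""
    return abs(implied_class_prior(probs) - float(np.mean(labels)))
```

In use, one would report `distribution_gap` on the training and test splits side by side and read the difference as the compression-versus-generalization signal.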
The combination of machine learning models with physical models is a recent research path to learn robust data representations. In this paper, we introduce p$^3$VAE, a generative model that integrates a perfect physical model which partially explains the true underlying factors of variation in the data. To fully leverage our hybrid design, we propose a semi-supervised optimization procedure and an inference scheme that comes along with meaningful uncertainty estimates. We apply p$^3$VAE to the semantic segmentation of high-resolution hyperspectral remote sensing images. Our experiments on a simulated dataset demonstrate the benefits of our hybrid model over conventional machine learning models in terms of extrapolation capabilities and interpretability. In particular, we show that p$^3$VAE naturally exhibits high disentanglement capabilities. Our code and data have been made publicly available at https://github.com/Romain3Ch216/p3VAE.
In the last few years, neural networks (NNs) have advanced from laboratory environments to the state of the art for many real-world problems. It has been shown that NN models (i.e., their weights and biases) evolve along unique trajectories in weight space during training. Consequently, a population of such neural network models (referred to as a model zoo) forms structures in weight space. We argue that the geometry, curvature, and smoothness of these structures contain information about the state of training and can reveal latent properties of individual models. With such model zoos, one could investigate (i) novel approaches for model analysis, (ii) the discovery of unknown learning dynamics, (iii) learning rich representations of such populations, or (iv) exploiting model zoos for generative modelling of NN weights and biases. Unfortunately, the lack of standardized model zoos and available benchmarks significantly increases the friction for further research on populations of NNs. With this work, we publish a novel dataset of model zoos containing systematically generated and diverse populations of NN models for further research. In total, the proposed model zoo dataset is based on eight image datasets and consists of 27 model zoos trained with varying hyperparameter combinations, including 50'360 unique NN models as well as their sparsified twins, resulting in over 3'844'360 collected model states. In addition to the model zoo data, we provide an in-depth analysis of the zoos and benchmarks for multiple downstream tasks. The dataset is available at www.modelzoos.cc.